32 research outputs found

    Modulation parameter estimation of LFM interference for direct sequence spread spectrum communication system in alpha-stable noise

    The linear frequency modulation (LFM) interference is one of the typical broadband interferences in direct sequence spread spectrum (DSSS) communication systems. In this article, a novel modulation parameter estimation method for LFM interference is proposed for DSSS communication systems in alpha-stable noise. To estimate the modulation parameters accurately, the alpha-stable noise must be suppressed first. Thus, we formulate a new generalized extended linear chirplet transform to suppress the alpha-stable noise, so that a robust time-frequency transformation of the LFM interference is obtained. Then, the Radon transform is applied to locate the maximum of the time-frequency representation, and the chirp rate is estimated from the angle corresponding to that maximum. In addition, a generalized Fourier transform is introduced to estimate the initial frequency of the LFM interference. For the performance analysis, the Cramér-Rao lower bounds of the estimated chirp rate and initial frequency of the LFM interference in the presence of alpha-stable noise are derived, and the asymptotic properties of the modulation parameter estimator are analyzed. Simulation results demonstrate that the proposed parameter estimation method significantly outperforms existing methods, especially in the low-SNR regime.
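
    The abstract's pipeline (chirplet transform, then Radon transform for the chirp rate, then a generalized Fourier transform for the initial frequency) is beyond a short snippet, but the underlying estimation problem is easy to illustrate. Below is a minimal Python sketch of a simpler classical baseline, a dechirping grid search, not the paper's method: multiplying an LFM signal by exp(-jπkt²) at the correct chirp rate collapses it to a pure tone, so the candidate rate yielding the sharpest FFT peak estimates the chirp rate, and the peak location estimates the initial frequency. All signal parameters are hypothetical, and Gaussian noise stands in for alpha-stable noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 8192.0
    t = np.arange(0, 1.0, 1.0 / fs)
    f0_true, k_true = 500.0, 2000.0      # initial frequency (Hz) and chirp rate (Hz/s)
    x = np.exp(2j * np.pi * (f0_true * t + 0.5 * k_true * t**2))
    x = x + 0.3 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

    def dechirp_estimate(x, t, k_grid):
        """Multiplying by exp(-j*pi*k*t^2) at the true chirp rate collapses the LFM
        signal to a single tone, so the sharpest FFT peak identifies (k, f0)."""
        best_peak, k_hat, f0_hat = -np.inf, None, None
        freqs = np.fft.fftfreq(t.size, t[1] - t[0])
        for k in k_grid:
            spec = np.abs(np.fft.fft(x * np.exp(-1j * np.pi * k * t**2)))
            if spec.max() > best_peak:
                best_peak, k_hat, f0_hat = spec.max(), k, freqs[spec.argmax()]
        return k_hat, f0_hat

    k_hat, f0_hat = dechirp_estimate(x, t, np.linspace(0.0, 4000.0, 401))
    print(f"chirp rate ~ {k_hat:.0f} Hz/s, initial frequency ~ {f0_hat:.0f} Hz")
    ```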

    Exploring the Design Space of Immersive Urban Analytics

    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as the HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on this comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes among linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view for given circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed. (Comment: 23 pages, 11 figures)

    A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization

    Inspired by the great success of machine learning (ML), researchers have applied ML techniques to visualization to achieve better design, development, and evaluation of visualizations. This branch of studies, known as ML4VIS, has gained increasing research attention in recent years. To successfully adapt ML techniques for visualization, a structured understanding of their integration is needed. In this paper, we systematically survey 88 ML4VIS studies, aiming to answer two motivating questions: "what visualization processes can be assisted by ML?" and "how can ML techniques be used to solve visualization problems?" The survey reveals seven main processes where the employment of ML techniques can benefit visualization: Data Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS Interaction, VIS Reading, and User Profiling. These seven processes are related to existing visualization theoretical models in an ML4VIS pipeline, aiming to illuminate the role of ML-assisted visualization in general. Meanwhile, the seven processes are mapped onto the main learning tasks in ML to align the capabilities of ML with the needs of visualization. Current practices and future opportunities of ML4VIS are discussed in the context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are still needed in the area of ML4VIS, we hope this paper can provide a stepping stone for future exploration. A web-based interactive browser of this survey is available at https://ml4vis.github.io (Comment: 19 pages, 12 figures, 4 tables)

    A joint multi user detection scheme for UWB sensor networks using waveform division multiple access

    A joint multiuser detection (MUD) scheme for wireless sensor networks (WSNs) is proposed to suppress the multiple access interference (MAI) caused by a large number of sensor nodes. In WSNs, waveform division multiple access ultra-wideband (WDMA-UWB) technology is well suited for robust communications: multiple sensor nodes are allowed to transmit modulated signals over the same time periods and frequency bands by using orthogonal pulse waveforms. This paper employs a mapping function based on optimal multiuser detection (OMD) to map the received bits into a mapping space where error bits can be distinguished. To correct error bits caused by MAI, the proposed joint MUD scheme combines the mapping function with suboptimal algorithms. Numerical results demonstrate that the proposed MUD scheme performs well in suppressing MAI and resisting the near-far effect, with low computational complexity.
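
    The joint MUD mapping scheme itself is not reproduced here, but the WDMA premise the abstract builds on is simple to sketch: users transmit over the same time and frequency resources with mutually orthogonal pulses, so a bank of single-user correlators separates them; residual MAI errors are what the proposed scheme would then correct. In this hypothetical sketch, a Gram-Schmidt pulse set stands in for real UWB pulse designs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, pulse_len, n_bits = 4, 64, 1000

    # Hypothetical orthogonal pulse set: Gram-Schmidt (QR) on random vectors,
    # a stand-in for the modified Hermite / wavelet pulses of real WDMA-UWB.
    pulses = np.linalg.qr(rng.standard_normal((pulse_len, n_users)))[0].T  # (n_users, pulse_len)

    bits = rng.integers(0, 2, (n_users, n_bits))
    symbols = 2 * bits - 1                          # BPSK mapping
    rx = symbols.T @ pulses                         # all users share the same frames
    rx += 0.5 * rng.standard_normal(rx.shape)       # AWGN channel

    # Bank of single-user correlators: orthogonal pulses keep MAI near zero,
    # so simple sign decisions suffice in this idealized setting.
    decisions = (rx @ pulses.T > 0).astype(int).T   # (n_users, n_bits)
    print("BER per user:", (decisions != bits).mean(axis=1))
    ```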

    Towards automated infographic design: Deep learning-based auto-extraction of extensible timeline

    Designers need to consider not only perceptual effectiveness but also visual styles when creating an infographic. This process can be difficult and time-consuming for professional designers, not to mention non-expert users, leading to the demand for automated infographic design. As a first step, we focus on timeline infographics, which have been widely used for centuries. We contribute an end-to-end approach that automatically extracts an extensible timeline template from a bitmap image. Our approach adopts a deconstruction-and-reconstruction paradigm. At the deconstruction stage, we propose a multi-task deep neural network that simultaneously parses two kinds of information from a bitmap timeline: 1) the global information, i.e., the representation, scale, layout, and orientation of the timeline, and 2) the local information, i.e., the location, category, and pixels of each visual element on the timeline. At the reconstruction stage, we propose a pipeline with three techniques, i.e., Non-Maximum Merging, Redundancy Recover, and DL GrabCut, to extract an extensible template from the infographic by utilizing the deconstruction results. To evaluate the effectiveness of our approach, we synthesize a timeline dataset (4,296 images) and collect a real-world timeline dataset (393 images) from the Internet. We first report quantitative evaluation results of our approach on the two datasets. Then, we present examples of automatically extracted templates and timelines automatically generated from these templates to qualitatively demonstrate the performance. The results confirm that our approach can effectively extract extensible templates from real-world timeline infographics. (Comment: 10 pages; keywords: Automated Infographic Design, Deep Learning-based Approach, Timeline Infographics, Multi-task Model)
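
    The paper's reconstruction techniques are only named in the abstract, so the following is a speculative sketch of what a Non-Maximum Merging step could look like, inferred from the name alone: where standard non-maximum suppression discards overlapping detections, overlapping boxes are instead merged into one enclosing box, so fragmented detections of a single visual element are unified. The box format and threshold are illustrative.

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def non_maximum_merging(boxes, thresh=0.5):
        """Unlike NMS, overlapping detections are merged into one enclosing
        box rather than discarded, unifying fragments of one element."""
        merged = []
        for box in boxes:
            for i, m in enumerate(merged):
                if iou(box, m) > thresh:
                    merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                                 max(m[2], box[2]), max(m[3], box[3]))
                    break
            else:
                merged.append(tuple(box))
        return merged

    # The first two boxes overlap heavily and collapse into one; the third survives.
    print(non_maximum_merging([(0, 0, 10, 10), (2, 2, 12, 11), (50, 50, 60, 60)]))
    ```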

    A robust modulation classification method using convolutional neural networks

    Automatic modulation classification (AMC) is a core technique in noncooperative communication systems. In particular, feature-based (FB) AMC algorithms have been widely studied. Current FB AMC methods are commonly designed for a limited set of modulation types and lack generalization ability; to tackle this challenge, a robust AMC method using convolutional neural networks (CNNs) is proposed in this paper. In total, 15 different modulation types are considered. The proposed method can classify received signals directly, without hand-crafted feature extraction, because it automatically learns features from the received signals. The features learned by the CNN are presented and analyzed, and the robust features of the received signals in a specific SNR range are studied. The classification accuracy of the CNN is shown to be remarkable, particularly at low SNRs. The generalization ability of the robust features is also shown to be excellent using a support vector machine (SVM). Finally, to better illuminate the feature-learning process, outputs of intermediate layers of the CNN are visualized.
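
    The abstract does not specify the network architecture, so the sketch below only shows the general shape of such a classifier: a small 1D CNN (here in PyTorch) that maps raw I/Q samples, as two input channels, to logits over 15 modulation classes with no hand-crafted feature extraction. All layer sizes are illustrative, not the architecture from the paper.

    ```python
    import torch
    import torch.nn as nn

    class AMCNet(nn.Module):
        """Toy 1D CNN over raw I/Q samples; layer sizes are illustrative."""
        def __init__(self, n_classes=15):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
                nn.AdaptiveAvgPool1d(1),          # global average over time
            )
            self.classifier = nn.Linear(128, n_classes)

        def forward(self, x):                     # x: (batch, 2, n_samples), I and Q channels
            return self.classifier(self.features(x).squeeze(-1))

    logits = AMCNet()(torch.randn(8, 2, 1024))    # 8 random signals of 1024 samples
    print(logits.shape)                           # torch.Size([8, 15])
    ```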

    Visual analysis of discrimination in machine learning

    The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support more comprehensive analysis. To reveal detailed information on algorithmic discrimination, DiscriLens identifies a collection of potentially discriminatory itemsets based on causal modeling and classification rule mining. By combining an extended Euler diagram with a matrix-based visualization, we develop a novel set visualization that facilitates the exploration and interpretation of discriminatory itemsets. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance for understanding and reducing algorithmic discrimination.
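
    DiscriLens combines causal modeling with classification rule mining; the toy sketch below illustrates only the simpler question underneath: within an itemset (a conjunction of attribute values), how far apart are positive-decision rates across a protected attribute? The columns, data, and flagging threshold are all hypothetical, not DiscriLens's actual logic.

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "degree":   ["BS", "BS", "MS", "MS", "BS", "MS", "BS", "MS"],
        "gender":   ["F",  "M",  "F",  "M",  "F",  "M",  "M",  "F"],
        "admitted": [0,    1,    1,    1,    0,    1,    1,    0],
    })

    def rate_gap(group, protected="gender"):
        """Spread of positive-decision rates across the protected attribute."""
        rates = group.groupby(protected)["admitted"].mean()
        return rates.max() - rates.min()

    # Each value of "degree" defines a (one-item) itemset; flag large gaps.
    for itemset, group in df.groupby("degree"):
        gap = rate_gap(group)
        flag = "  <- potentially discriminatory" if gap > 0.3 else ""
        print(f"degree={itemset}: admission-rate gap {gap:.2f}{flag}")
    ```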

    VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching

    Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to move seamlessly from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) from flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players on real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence. (Comment: to appear in IEEE Transactions on Visualization and Computer Graphics (IEEE VIS), 2023)
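
    VIRD's data schema is not described beyond the abstract, so the sketch below merely illustrates the top-down match → rally → shot drill-down that the workflow describes, using hypothetical field names.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Shot:
        player: str
        shot_type: str
        trajectory: list[tuple[float, float, float]]  # sampled 3D shuttle positions (m)

    @dataclass
    class Rally:
        winner: str
        shots: list[Shot] = field(default_factory=list)

    @dataclass
    class Match:
        players: tuple[str, str]
        rallies: list[Rally] = field(default_factory=list)

        def overview(self):
            """High-level stats shown before drilling into rallies and shots."""
            won = {p: sum(r.winner == p for r in self.rallies) for p in self.players}
            return {"rallies": len(self.rallies), "rallies_won": won}

    m = Match(("A", "B"), [Rally("A", [Shot("A", "smash", [(0, 0, 1.1), (3, 0.2, 2.4)])]),
                           Rally("B")])
    print(m.overview())   # {'rallies': 2, 'rallies_won': {'A': 1, 'B': 1}}
    ```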

    LassoNet: Deep Lasso-Selection of 3D Point Clouds

    Selection is a fundamental task in the exploratory analysis and visualization of 3D point clouds. Prior selection methods were developed mainly on the basis of heuristics such as local point density, limiting their applicability to general data. Specific challenges are rooted in the great variability of point clouds (e.g., dense vs. sparse), viewpoints (e.g., occluded vs. non-occluded), and lassos (e.g., small vs. large). In this work, we introduce LassoNet, a new deep neural network for lasso selection of 3D point clouds that attempts to learn a latent mapping from viewpoint and lasso to point cloud regions. To achieve this, we couple user-target points with viewpoint and lasso information through a 3D coordinate transform and a naive selection, and we improve the method's scalability via intention filtering and farthest-point sampling. A hierarchical network is trained on a dataset with over 30K lasso-selection records on two different point cloud datasets. We conduct a formal user study to compare LassoNet with two state-of-the-art lasso-selection methods. The evaluation confirms that our approach improves selection effectiveness and efficiency across different combinations of 3D point clouds, viewpoints, and lassos. Project website: https://lassonet.github.io (Comment: 10 pages)
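
    The "naive selection" that the abstract couples with the 3D coordinate transform presumably resembles the classic screen-space test; the sketch below assumes exactly that: project the points with a view-projection matrix and keep those whose projections fall inside the lasso polygon. The identity matrix stands in for a real camera.

    ```python
    import numpy as np
    from matplotlib.path import Path

    def naive_lasso_select(points, view_proj, lasso_screen):
        """points: (N, 3); view_proj: (4, 4); lasso_screen: (M, 2) polygon vertices."""
        homo = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
        clip = homo @ view_proj.T
        ndc = clip[:, :2] / clip[:, 3:4]                        # perspective divide
        return Path(lasso_screen).contains_points(ndc)

    rng = np.random.default_rng(1)
    pts = rng.uniform(-1, 1, (1000, 3))
    identity_vp = np.eye(4)                                     # stand-in camera
    lasso = np.array([(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)])
    mask = naive_lasso_select(pts, identity_vp, lasso)
    print(f"selected {mask.sum()} of {len(pts)} points")
    ```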